Remove getEnvEnablePDL in favor of enable_pdl parameter #1446

Merged: 7 commits into flashinfer-ai:main from remove_pdl_env on Aug 13, 2025

Conversation

@yongwww (Collaborator) commented Aug 10, 2025

πŸ“Œ Description

πŸ” Related Issues

πŸš€ Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

βœ… Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

πŸ§ͺ Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

@gemini-code-assist (bot, Contributor) left a comment

Summary of Changes

Hello @yongwww, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've refactored the mechanism for enabling Programmatic Dependent Launch (PDL) within the codebase. Previously, this feature was controlled by checking a global environment variable. Now, I've removed that environment variable check and instead introduced an explicit enable_pdl boolean parameter. This parameter is passed directly to all relevant CUDA kernel launch functions and associated utility calls across various components, including MoE, quantization, and GEMM operations. This change makes the control over PDL more explicit, localized, and easier to manage programmatically, improving clarity and reducing reliance on global state.

Highlights

  • Removal of Global Environment Variable Check: The global getEnvEnablePDL function and its declarations have been completely removed from the codebase.
  • Introduction of Explicit enable_pdl Parameter: A new enable_pdl boolean parameter has been added to numerous CUDA kernel launch functions and related helper functions across various modules, including Mixture-of-Experts (MoE) operations, quantization routines, and GEMM (General Matrix Multiply) implementations.
  • Struct Updates for PDL Control: The TmaWarpSpecializedGroupedGemmInput and TllmGenFmhaRunnerParams structs have been updated to include the enable_pdl flag, allowing this setting to be carried through data structures.
  • Propagation of PDL Control Through Call Stack: The enable_pdl parameter is now explicitly propagated through the entire call stack, from Python wrappers down to the underlying CUDA kernel launches, ensuring direct control over the Programmatic Dependent Launch feature (a launch-attribute sketch follows below).
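
For context on what the flag controls at launch time: in CUDA, PDL is opted into per kernel launch via a launch attribute rather than a global switch. The sketch below is a minimal illustration of that pattern, not FlashInfer's actual launch helper; my_kernel and launch_with_pdl are hypothetical names, and the real kernels take many more arguments.

```cpp
#include <cuda_runtime.h>

// Hypothetical kernel standing in for a real one. On sm_90+, a PDL-aware
// kernel would call cudaGridDependencySynchronize() before reading results
// produced by the preceding kernel in the stream.
__global__ void my_kernel(float* out, const float* in, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = in[i] * 2.0f;
}

// An explicit enable_pdl parameter gates the per-launch attribute,
// replacing the old getEnvEnablePDL() environment check.
cudaError_t launch_with_pdl(float* out, const float* in, int n,
                            cudaStream_t stream, bool enable_pdl) {
  cudaLaunchConfig_t config = {};
  config.gridDim = dim3((n + 255) / 256);
  config.blockDim = dim3(256);
  config.stream = stream;

  cudaLaunchAttribute attr = {};
  attr.id = cudaLaunchAttributeProgrammaticStreamSerialization;
  attr.val.programmaticStreamSerializationAllowed = enable_pdl ? 1 : 0;
  config.attrs = &attr;
  config.numAttrs = 1;

  return cudaLaunchKernelEx(&config, my_kernel, out, in, n);
}
```
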
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with πŸ‘ and πŸ‘Ž on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩

@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request refactors the codebase to remove the dependency on the getEnvEnablePDL function, replacing it with an explicit enable_pdl parameter. This is a positive change that improves code clarity and testability by making dependencies explicit. The changes are extensive and mostly mechanical. I've identified a couple of critical issues where the new parameter was not propagated correctly through the call stack, which would result in compilation errors. Please see the detailed comments for suggestions on how to fix these issues.

@@ -3005,7 +3007,7 @@ void CutlassMoeFCRunner<T, WeightType, OutputType, InputType, BackBoneType, Enab
       static_cast<UnfusedGemmOutputType const*>(gemm_output), fc2_fp8_quant, fc1_expert_biases,
       bias_is_broadcast, expert_first_token_offset, num_experts_per_node, inter_size,
       expanded_num_rows, fc1_activation_type, quant_params, use_per_expert_act_scale,
-      fc2_fp4_act_flat, stream);
+      fc2_fp4_act_flat, stream, enable_pdl);

Contributor (gemini-code-assist bot):

critical

The enable_pdl variable is used here, but it's not defined within the scope of gemm1. This will cause a compilation error. This issue also exists for gemm2 and its callees. The enable_pdl parameter needs to be plumbed through several functions.

Specifically, you need to (see the sketch after this list):

  1. Add bool enable_pdl to the function signatures of CutlassMoeFCRunner::gemm1 and CutlassMoeFCRunner::gemm2 in csrc/nv_internal/tensorrt_llm/kernels/cutlass_kernels/include/moe_kernels.h and their implementations in this file.
  2. Pass enable_pdl from runMoe to gemm1 and gemm2.
  3. If BlockScaleFC1 and BlockScaleFC2 are called from gemm1 and gemm2 respectively, you'll need to add bool enable_pdl to their signatures and pass it down as well.
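
A minimal, self-contained sketch of that plumbing, with stand-in names (MoeRunnerSketch and the empty helper bodies are placeholders, not the real CutlassMoeFCRunner):

```cpp
#include <cuda_runtime.h>

struct MoeRunnerSketch {
  // Step 2: runMoe forwards the flag it received from the caller.
  void runMoe(cudaStream_t stream, bool enable_pdl) {
    gemm1(stream, enable_pdl);
    gemm2(stream, enable_pdl);
  }

  // Step 1: bool enable_pdl added to the gemm1/gemm2 signatures.
  void gemm1(cudaStream_t stream, bool enable_pdl) {
    blockScaleFC1(stream, enable_pdl);  // Step 3: forwarded to the FC helpers.
  }
  void gemm2(cudaStream_t stream, bool enable_pdl) {
    blockScaleFC2(stream, enable_pdl);
  }

  void blockScaleFC1(cudaStream_t, bool) { /* kernel launch elided */ }
  void blockScaleFC2(cudaStream_t, bool) { /* kernel launch elided */ }
};
```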

@@ -4593,7 +4601,7 @@

   prepareRouting(num_tokens, workspace_ptr_char, stream);
   prepareQuantParams(num_tokens, workspace_ptr_char, stream);
-  prepareTmaWsInputs(num_tokens, workspace_ptr_char, expert_weights, stream);
+  prepareTmaWsInputs(num_tokens, workspace_ptr_char, expert_weights, stream, false);

Contributor (gemini-code-assist bot):

critical

The call to prepareRouting on line 4602 is missing the new enable_pdl argument. The function signature for prepareRouting was updated to include bool enable_pdl, but this call site was not updated, which will cause a compilation error. You should probably pass false here, as is done for the prepareTmaWsInputs call in the same hunk (see the sketch below).
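
Following that suggestion, the corrected call site would look like the line below (hypothetical, mirroring the prepareTmaWsInputs call in the same hunk):

```cpp
// Pass false explicitly, matching the updated prepareTmaWsInputs call.
prepareRouting(num_tokens, workspace_ptr_char, stream, false);
```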

@@ -259,7 +259,8 @@ void buildMinLatencyActiveExpertMaps(int* num_active_experts_per_node,
                                      int const experts_per_token, int const start_expert,
                                      int const end_expert, int const num_experts_per_node,
                                      int const cluster_rank, int const cluster_size,
-                                     int const num_experts_smem, cudaStream_t const stream) {
+                                     int const num_experts_smem, cudaStream_t const stream,
+                                     bool enable_pdl = false) {

Collaborator:

stream should be the last argument, and please do not set a default value for enable_pdl. (A corrected signature sketch follows below.)
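
Applied to the hunk above, the requested convention would look like the following (abridged; the elided parameters are unchanged):

```cpp
// enable_pdl moves before the stream and carries no default value.
void buildMinLatencyActiveExpertMaps(int* num_active_experts_per_node,
                                     /* ... other parameters unchanged ... */
                                     int const num_experts_smem,
                                     bool enable_pdl,
                                     cudaStream_t const stream);
```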

@zhyncs (Member) commented Aug 12, 2025

@yongwww quick question: is this PR ready?

@yzh119 marked this pull request as ready for review on August 13, 2025, 04:33.
@@ -275,6 +275,8 @@ struct TllmGenFmhaRunnerParams {
   float mScaleSfO;
   // The cuda stream.
   cudaStream_t stream;
+  // Whether to enable PDL (Programmatic Dependent Launch).
+  bool enable_pdl = false;

Collaborator:

Don't set a default value here; it doesn't work, since we memset the entire POD anyway in the constructors:
https://github.com/flashinfer-ai/flashinfer/pull/1446/files#diff-878c76ab5e42dac3beb66b2da51e8d72a9b994d44dee598f251fefdcad8b11dcR304

And for Hopper/Blackwell kernels, it should always default to true.
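
A minimal sketch of why the in-class default is ineffective here; the struct names are illustrative, not the real TllmGenFmhaRunnerParams:

```cpp
#include <cstring>

struct ParamsSketch {
  bool enable_pdl = true;  // in-class default, applied at construction
};

struct RunnerSketch {
  ParamsSketch params;
  RunnerSketch() {
    // Zeroing the whole POD resets enable_pdl to false, so the default
    // member initializer never survives construction.
    std::memset(&params, 0, sizeof(params));
  }
};
```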

Collaborator:

Fixed.

@nvpohanh (Contributor) commented:

@yongwww @yzh119 I see that this PR only changed MoE-related kernels. Do we plan to make the same change for other kernels like Attn/GEMM/AR+Norm/activation/etc.?

@yzh119 (Collaborator) commented Aug 13, 2025

@nvpohanh Not only MoE kernels: all dependencies on getEnvEnablePDL (including trtllm-gen's batched GEMM and attention) have been removed.

> AR+Norm/activation

These operators have never relied on getEnvEnablePDL; they have used an explicit enable_pdl parameter since they were introduced to FlashInfer.

@yzh119 (Collaborator) left a comment

CI is temporarily broken; I ran the Blackwell tests locally and all unit tests passed.

Thanks for your contribution @cyx-6 @yongwww !

@yzh119 merged commit 3628a54 into flashinfer-ai:main on Aug 13, 2025. 1 check passed.
@yongwww deleted the remove_pdl_env branch on August 13, 2025, 16:41.
@IwakuraRein (Contributor) commented:

@yzh119 @yongwww core.py:1442 is missing the enable_pdl argument. I've created #1480 to fix it.

@yyihuang mentioned this pull request on Aug 14, 2025.
yyihuang added a commit that referenced this pull request Aug 14, 2025
## πŸ“Œ Description

Following #1446.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
yyihuang added a commit that referenced this pull request Aug 14, 2025
## πŸ“Œ Description

Fix the missing `enable_pdl` argument introduced in #1446.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Avery Yingyi Huang <[email protected]>